3,112 research outputs found

    Genetic algorithms with DNN-based trainable crossover as an example of partial specialization of general search

    Full text link
    Universal induction relies on some general search procedure that is doomed to be inefficient. One possibility to achieve both generality and efficiency is to specialize this procedure w.r.t. any given narrow task. However, complete specialization that implies direct mapping from the task parameters to solutions (discriminative models) without search is not always possible. In this paper, partial specialization of general search is considered in the form of genetic algorithms (GAs) with a specialized crossover operator. We perform a feasibility study of this idea, implementing such an operator in the form of a deep feedforward neural network. GAs with trainable crossover operators are compared with the result of complete specialization, which is also represented as a deep neural network. Experimental results show that specialized GAs can be more efficient than both general GAs and discriminative models. Comment: AGI 2017 proceedings. The final publication is available at link.springer.co
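    To make the idea of a trainable crossover operator concrete, the following minimal sketch pairs a toy GA with a one-hidden-layer network that maps two parent genomes to an offspring. Everything here (the toy fitness function, network size, selection and mutation settings) is an illustrative assumption rather than the authors' implementation, and the training of the crossover network across task instances is omitted.

    import numpy as np

    rng = np.random.default_rng(0)
    GENOME_LEN, POP, GENS = 16, 64, 100

    TARGET = rng.normal(size=GENOME_LEN)   # hypothetical task parameters

    def fitness(x):
        # Toy task: minimize squared distance to an unknown target vector.
        return -np.sum((x - TARGET) ** 2)

    # One-hidden-layer network acting as the crossover operator; in the setting
    # described above its weights would be trained across task instances,
    # which is omitted here (random weights only).
    W1 = rng.normal(scale=0.1, size=(2 * GENOME_LEN, 32))
    W2 = rng.normal(scale=0.1, size=(32, GENOME_LEN))

    def nn_crossover(p1, p2):
        h = np.tanh(np.concatenate([p1, p2]) @ W1)
        return 0.5 * (p1 + p2) + h @ W2      # parental blend plus learned correction

    pop = rng.normal(size=(POP, GENOME_LEN))
    for _ in range(GENS):
        scores = np.array([fitness(x) for x in pop])
        parents = pop[np.argsort(scores)][-POP // 2:]        # truncation selection
        children = []
        for _ in range(POP - len(parents)):
            i, j = rng.choice(len(parents), size=2, replace=False)
            child = nn_crossover(parents[i], parents[j])
            children.append(child + rng.normal(scale=0.05, size=GENOME_LEN))  # mutation
        pop = np.vstack([parents, children])

    print("best fitness:", max(fitness(x) for x in pop))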

    Graduate students navigating social-ecological research: insights from the Long-Term Ecological Research Network

    Get PDF
    Interdisciplinary, collaborative research capable of capturing the feedbacks between biophysical and social systems can improve the capacity for sustainable environmental decision making. Networks of researchers provide unique opportunities to foster social-ecological inquiry. Although insights into interdisciplinary research have been discussed elsewhere, they rarely address the role of networks and often come from the perspectives of more senior scientists. We have provided graduate student perspectives on interdisciplinary degree paths from within the Long-Term Ecological Research (LTER) Network. Focusing on data from a survey of graduate students in the LTER Network and four self-identified successful graduate student research experiences, we examined the importance of funding, pedagogy, research design and development, communication, networking, and culture and attitude to students pursuing social-ecological research. Through sharing insights from successful graduate student approaches to social-ecological research within the LTER Network, we hope to facilitate dialogue between students, faculty, and networks to improve training for interdisciplinary scientists.

    A hybrid kinetic Monte Carlo method for simulating silicon films grown by plasma-enhanced chemical vapor deposition

    Get PDF
    We present a powerful kinetic Monte Carlo (KMC) algorithm that allows one to simulate the growth of nanocrystalline silicon by plasma-enhanced chemical vapor deposition (PECVD) for film thicknesses as large as several hundred monolayers. Our method combines a standard n-fold KMC algorithm with an efficient Markovian random walk scheme accounting for the surface diffusive processes of the species involved in PECVD. These processes are extremely fast compared to chemical reactions, so in a brute-force application of the KMC method more than 99% of the computational time is spent monitoring them. Our method decouples the treatment of these events from the rest of the reactions in a systematic way, thereby dramatically increasing the efficiency of the corresponding KMC algorithm. It also makes use of a very rich kinetic model which includes 5 species (H, SiH3, SiH2, SiH, and Si2H5) that participate in 29 reactions. We have applied the new method in simulations of silicon growth under several conditions (in particular, silane fraction in the gas mixture), including those usually realized in actual PECVD technologies. This has allowed us to directly compare against available experimental data for the growth rate, the mesoscale morphology, and the chemical composition of the deposited film as a function of dilution ratio.
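    For orientation, the sketch below shows the standard n-fold (Gillespie/BKL-style) selection step that such a KMC simulation is built on; the rate table is a placeholder, not the paper's reaction set, and the fast surface-diffusion events that the hybrid method treats separately with an analytic random-walk scheme are only indicated by a comment.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical rate constants (s^-1) for a few slow chemical events;
    # the real model involves 5 species and 29 reactions.
    rates = {"SiH3_adsorption": 2.0e3, "H_abstraction": 5.0e2, "SiH2_insertion": 1.0e2}

    def kmc_step(rates):
        names = list(rates)
        r = np.array([rates[n] for n in names])
        total = r.sum()
        event = names[np.searchsorted(np.cumsum(r), rng.random() * total)]
        dt = -np.log(rng.random()) / total    # exponentially distributed waiting time
        return event, dt

    t = 0.0
    for _ in range(5):
        event, dt = kmc_step(rates)
        t += dt
        # In the hybrid scheme, the many fast diffusion hops occurring between
        # chemical events would be propagated here analytically as a Markovian
        # random walk instead of being resolved hop by hop.
        print(f"t = {t:.3e} s: {event}")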

    Classical model of elementary particle with Bertotti-Robinson core and extremal black holes

    Full text link
    We discuss the question of whether the Reissner-Nordström (RN) metric can be glued to other solutions of the Einstein-Maxwell equations in such a way that (i) the singularity at r=0 typical of the RN metric is removed and (ii) the matching is smooth. Such a construction could be viewed as a classical model of an elementary particle balanced by its own forces without support by an external agent. One choice is the Minkowski interior, which goes back to an old idea of Vilenkin and Fomin, who claimed that in this case the bare delta-like stresses at the horizon vanish if the RN metric is extremal. However, the relevant entity here is the integral of these stresses over the proper distance, which is infinite in the extremal case. As a result of the competition of these two factors, the Lanczos tensor does not vanish and the extremal RN metric cannot be glued to the Minkowski metric smoothly, so the elementary-particle model as a ball empty inside fails. We examine the alternative possibility for the extremal RN metric - gluing to the Bertotti-Robinson (BR) metric. For a surface placed outside the horizon there always exist bare stresses, but their amplitude goes to zero as the radius of the shell approaches that of the horizon. This limit realizes Wheeler's idea of "mass without mass" and "charge without charge". We generalize the model to the extremal Kerr-Newman metric glued to the rotating analog of the BR metric. Comment: 23 pages. Misprints corrected.
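    For reference, the standard textbook forms of the metrics involved (quoted here for orientation only, not reproduced from the paper; geometrized units G=c=1):

    % Reissner-Nordstrom metric and its extremal near-horizon (Bertotti-Robinson) limit.
    \[
      ds^2 = -f(r)\,dt^2 + f(r)^{-1}\,dr^2 + r^2\,d\Omega^2,
      \qquad f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2},
    \]
    % The metric is extremal when M = |Q|, with a degenerate horizon at r = M.
    % Its near-horizon limit is the Bertotti-Robinson geometry AdS_2 x S^2:
    \[
      ds^2 = \frac{Q^2}{\rho^2}\left(-dt^2 + d\rho^2\right) + Q^2\,d\Omega^2 .
    \]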

    Compact x-ray source based on burst-mode inverse Compton scattering at 100 kHz

    Get PDF
    A design for a compact x-ray light source (CXLS) with flux and brilliance orders of magnitude beyond existing laboratory-scale sources is presented. The source is based on inverse Compton scattering of a high-brightness electron bunch on a picosecond laser pulse. The accelerator is a novel high-efficiency standing-wave linac and RF photoinjector powered by a single ultrastable RF transmitter at X-band RF frequency. The high efficiency permits operation at repetition rates up to 1 kHz, which is further boosted to 100 kHz by operating with trains of 100 bunches of 100 pC charge, each separated by 5 ns. The entire accelerator is approximately 1 meter long and produces hard x-rays tunable over a wide range of photon energies. The colliding laser is a Yb:YAG solid-state amplifier producing 1030 nm, 100 mJ pulses at the same 1 kHz repetition rate as the accelerator. The laser pulse is frequency-doubled and stored for many passes in a ringdown cavity to match the linac pulse structure. At a photon energy of 12.4 keV, the predicted x-ray flux is 5×10^11 photons/second in a 5% bandwidth and the brilliance is 2×10^12 photons/(sec mm^2 mrad^2 0.1%) in pulses with an RMS pulse length of 490 fs. The nominal electron beam parameters are 18 MeV kinetic energy, 10 microamp average current, and 0.5 microsecond macropulse length, resulting in an average electron beam power of 180 W. Optimization of the x-ray output is presented along with the design of the accelerator, laser, and x-ray optic components that are specific to the particular characteristics of the Compton-scattered x-ray pulses. Comment: 25 pages, 24 figures, 54 references.
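    As a quick consistency check of the quoted parameters (only the arithmetic is added here; all input numbers are taken from the abstract above):

    \[
      100~\text{bunches} \times 5~\text{ns} = 0.5~\mu\text{s macropulse},
      \qquad
      100~\text{bunches/macropulse} \times 1~\text{kHz} = 10^{5}~\text{collisions/s},
    \]
    \[
      I_\text{avg} = 100~\text{pC} \times 100 \times 1~\text{kHz} = 10~\mu\text{A},
      \qquad
      P_\text{beam} \approx 18~\text{MeV}/e \times 10~\mu\text{A} = 180~\text{W}.
    \]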

    EFFICIENT MODULAR IMPLEMENTATION OF BRANCH-AND-BOUND ALGORITHMS

    Full text link
    This paper demonstrates how branch-and-bound algorithms can be modularized to obtain implementation efficiencies. For the manager, this advantage can be used to obtain faster implementation of algorithm results; for the scientist, it allows efficiencies in the construction of similar algorithms with different search and addressing structures for the purpose of testing to find a preferred algorithm. The demonstration is achieved in part by showing how the computer code of a central module of logic can be transported between different algorithms that have the same search strategy. Modularizations of three common searches (the best-bound search and two variants of the last-in-first-out search) with two addressing methods are detailed and contrasted. Using four assembly line balancing algorithms as examples, modularization is demonstrated and the search and addressing methods are contrasted. The application potential of modularization is broad and includes linear programming-based integer programming. Benefits and disadvantages of modularization are discussed. Computational results demonstrate the viability of the method. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/75538/1/j.1540-5915.1988.tb00251.x.pd
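    The sketch below illustrates the modularization idea in miniature: a single branch-and-bound core accepts the search strategy (best-bound or last-in-first-out) as a plug-in module. The tiny 0/1 knapsack model is a stand-in problem chosen for brevity, not one of the assembly line balancing algorithms treated in the paper.

    import heapq

    class BestBoundSearch:                    # best-first: pop the node with the best bound
        def __init__(self): self.heap = []
        def push(self, bound, node): heapq.heappush(self.heap, (-bound, node))
        def pop(self): return heapq.heappop(self.heap)[1]
        def empty(self): return not self.heap

    class LifoSearch:                         # last-in-first-out: a plain stack
        def __init__(self): self.stack = []
        def push(self, bound, node): self.stack.append(node)
        def pop(self): return self.stack.pop()
        def empty(self): return not self.stack

    def branch_and_bound(values, weights, capacity, search):
        best_value, best_set = 0, []
        search.push(sum(values), (0, 0, 0, []))    # node = (index, value, weight, chosen)
        while not search.empty():
            i, v, w, chosen = search.pop()
            if v > best_value:
                best_value, best_set = v, chosen
            if i == len(values):
                continue
            bound = v + sum(values[i:])            # crude optimistic bound
            if bound <= best_value:
                continue                           # prune
            if w + weights[i] <= capacity:         # branch: take item i
                search.push(bound, (i + 1, v + values[i], w + weights[i], chosen + [i]))
            search.push(bound, (i + 1, v, w, chosen))   # branch: skip item i
        return best_value, best_set

    # Same core, two interchangeable search modules.
    print(branch_and_bound([10, 7, 5], [4, 3, 2], 5, BestBoundSearch()))
    print(branch_and_bound([10, 7, 5], [4, 3, 2], 5, LifoSearch()))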

    Expression of Human α2-Adrenergic Receptors in Adipose Tissue of β3-Adrenergic Receptor-deficient Mice Promotes Diet-induced Obesity

    Get PDF
    Catecholamines play an important role in controlling white adipose tissue function and development. β- and α2-adrenergic receptors (ARs) couple positively and negatively, respectively, to adenylyl cyclase and are co-expressed in human adipocytes. Previous studies have demonstrated increased adipocyte α2/β-AR balance in obesity, and it has been proposed that increased α2-ARs in adipose tissue with or without decreased β-ARs may contribute mechanistically to the development of increased fat mass. To critically test this hypothesis, adipocyte α2/β-AR balance was genetically manipulated in mice. Human α2A-ARs were transgenically expressed in the adipose tissue of mice that were either homozygous (−/−) or heterozygous (+/−) for a disrupted β3-AR allele. Mice expressing α2-ARs in fat, in the absence of β3-ARs (β3-AR −/− background), developed high fat diet-induced obesity. Strikingly, this effect was due entirely to adipocyte hyperplasia and required the presence of α2-ARs, the absence of β3-ARs, and a high fat diet. Of note, obese α2-transgenic, β3 −/− mice failed to develop insulin resistance, which may reflect the fact that expanded fat mass was due to adipocyte hyperplasia and not adipocyte hypertrophy. In summary, we have demonstrated that increased α2/β-AR balance in adipocytes promotes obesity by stimulating adipocyte hyperplasia. This study also demonstrates one way in which two genes (α2 and β3-AR) and diet interact to influence fat mass.

    Statistical features of edge turbulence in RFX-mod from Gas Puffing Imaging

    Get PDF
    Plasma density fluctuations in the edge plasma of the RFX-mod device are measured through the Gas Puffing Imaging Diagnostics. Statistical features of the signal are quantified in terms of the Probability Distribution Function (PDF), computed for several kinds of discharges. The PDFs from discharges without particular control methods are found to be adequately described by a Gamma function, consistent with the recent results by Graves et al [J.P. Graves, et al, Plasma Phys. Control. Fusion 47, L1 (2005)]. On the other hand, pulses with external methods for plasma control feature modified PDFs. A first empirical analysis suggests that they may be interpolated through a linear combination of simple functions. An inspection of the literature shows that this kind of PDF is common to other devices as well, and has been suggested to be due to the simultaneous presence of different mechanisms driving, respectively, coherent bursts and Gaussian background turbulence. An attempt is made to relate differences in the PDFs to plasma conditions such as the local shift of the plasma column. A simple phenomenological model to interpret the nature of the PDF and assign a meaning to its parameters is also developed. Comment: 27 pages. Published in PPC
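    A minimal sketch of the kind of analysis described, with synthetic data standing in for the GPI signal: fit a Gamma distribution to the fluctuation amplitudes and compare it with the empirical PDF. The parameters and sample size below are placeholders, not values from the RFX-mod measurements.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    signal = rng.gamma(shape=2.5, scale=0.4, size=20000)   # placeholder for the GPI signal

    # Maximum-likelihood Gamma fit with the location fixed at zero.
    shape, loc, scale = stats.gamma.fit(signal, floc=0)

    # Empirical PDF versus the fitted Gamma PDF.
    hist, edges = np.histogram(signal, bins=80, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    fitted = stats.gamma.pdf(centers, shape, loc=loc, scale=scale)

    print(f"fitted shape = {shape:.2f}, scale = {scale:.2f}")
    print(f"max |empirical - fitted| = {np.abs(hist - fitted).max():.3f}")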

    CLASH-VLT: The mass, velocity-anisotropy, and pseudo-phase-space density profiles of the z=0.44 galaxy cluster MACS 1206.2-0847

    Get PDF
    We use an unprecedented dataset of about 600 redshifts for cluster members, obtained as part of a VLT/VIMOS large programme, to constrain the mass profile of the z=0.44 cluster MACS J1206.2-0847 over the radial range 0-5 Mpc (0-2.5 virial radii) using the MAMPOSSt and Caustic methods. We then add external constraints from our previous gravitational lensing analysis. We invert the Jeans equation to obtain the velocity-anisotropy profiles of cluster members. With the mass-density and velocity-anisotropy profiles we then obtain the first determination of a cluster pseudo-phase-space density profile. The kinematic and lensing determinations of the cluster mass profile are in excellent agreement. The profile is very well fitted by an NFW model with mass M200 = (1.4 ± 0.2) × 10^15 Msun and concentration c200 = 6 ± 1, only slightly higher than theoretical expectations. Other mass profile models also provide acceptable fits to our data, of slightly lower quality (Burkert, Hernquist, and Softened Isothermal Sphere) or comparable quality (Einasto) relative to NFW. The velocity-anisotropy profiles of the passive and star-forming cluster members are similar, close to isotropic near the center and increasingly radial outside. Passive cluster members follow the theoretical expectations for the pseudo-phase-space density profile and the relation between the slope of the mass-density profile and the velocity anisotropy extremely well. Star-forming cluster members show marginal deviations from theoretical expectations. This is the most accurate determination of a cluster mass profile out to a radius of 5 Mpc, and the only determination of the velocity-anisotropy and pseudo-phase-space density profiles of both passive and star-forming galaxies for an individual cluster. [abridged] Comment: A&A in press; 22 pages, 19 figures.
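    For reference, the standard definitions behind the quantities discussed (quoted here for orientation, not reproduced from the paper): the NFW mass-density profile, the concentration parameter, and the pseudo-phase-space density built from the density and velocity-dispersion profiles.

    % NFW mass-density profile, concentration, and pseudo-phase-space density.
    \[
      \rho_{\rm NFW}(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^{2}},
      \qquad
      c_{200} \equiv \frac{r_{200}}{r_s},
      \qquad
      Q(r) \equiv \frac{\rho(r)}{\sigma^{3}(r)},
    \]
    % where sigma(r) is the velocity-dispersion profile of the tracer galaxies.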

    An improved constraint satisfaction adaptive neural network for job-shop scheduling

    Get PDF
    Copyright © Springer Science + Business Media, LLC 2009. This paper presents an improved constraint satisfaction adaptive neural network for job-shop scheduling problems. The neural network is constructed based on the constraint conditions of a job-shop scheduling problem. Its structure and neuron connections can change adaptively according to the real-time constraint satisfaction situations that arise during the solving process. Several heuristics are also integrated within the neural network to enhance and accelerate its convergence and to improve the quality of the solutions produced. An experimental study based on a set of benchmark job-shop scheduling problems shows that the improved constraint satisfaction adaptive neural network outperforms the original constraint satisfaction adaptive neural network in terms of computational time and the quality of the schedules it produces. The neural network approach is also experimentally validated to outperform three classical heuristic algorithms that are widely used as the basis of many state-of-the-art scheduling systems. Hence, it may also be used to construct advanced job-shop scheduling systems. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01 and in part by the National Natural Science Foundation of China under Grant 60821063 and the National Basic Research Program of China under Grant 2009CB320601.
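    The sketch below illustrates, in miniature, the two constraint families such a constraint-satisfaction network has to enforce: operation precedence within a job and no overlap of operations sharing a machine. The data layout and the naive repair rule are illustrative assumptions, not the paper's network or heuristics.

    from collections import defaultdict

    # op = (job, index_in_job, machine, duration); the start times play the role
    # of the network's outputs.
    ops = [("J1", 0, "M1", 3), ("J1", 1, "M2", 2),
           ("J2", 0, "M2", 4), ("J2", 1, "M1", 1)]
    start = {op: 0 for op in ops}              # initial, deliberately infeasible schedule

    def violations(start):
        found = []
        by_job, by_machine = defaultdict(list), defaultdict(list)
        for op in ops:
            by_job[op[0]].append(op)
            by_machine[op[2]].append(op)
        for job_ops in by_job.values():        # precedence constraints within each job
            job_ops.sort(key=lambda o: o[1])
            for a, b in zip(job_ops, job_ops[1:]):
                if start[b] < start[a] + a[3]:
                    found.append(("precedence", a, b))
        for mach_ops in by_machine.values():   # resource constraints: no overlap per machine
            mach_ops.sort(key=lambda o: start[o])
            for a, b in zip(mach_ops, mach_ops[1:]):
                if start[b] < start[a] + a[3]:
                    found.append(("overlap", a, b))
        return found

    # Naive repair loop: push the later operation past the conflict, loosely in the
    # spirit of the feedback adjustments such an adaptive network applies while solving.
    while (conflicts := violations(start)):
        _, a, b = conflicts[0]
        start[b] = start[a] + a[3]
    print(start)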